We introduce a distance-based neural network model for regression, in which prediction uncertainty is quantified by belief functions on the real line. The model interprets the distances of the input vector to prototypes as pieces of evidence represented by Gaussian random fuzzy numbers (GRFNs) and combined by the generalized product-intersection rule, an operator extending Dempster's rule to random fuzzy sets. The network output is a GRFN that can be summarized by three numbers characterizing the most plausible predicted value, the variability around that value, and the epistemic uncertainty. Experiments on real datasets demonstrate the very good performance of the method as compared to state-of-the-art evidential and statistical learning algorithms. Keywords: evidence theory, Dempster-Shafer theory, belief functions, machine learning, random fuzzy sets.
Since information sources are usually imperfect, it is necessary to take their reliability into account in multi-source information fusion tasks. In this paper, we propose a new deep framework allowing us to merge multi-MR-image segmentation results using the formalism of Dempster-Shafer theory, while taking into account the reliability of the different modalities relative to the different classes. The framework is composed of an encoder-decoder feature extraction module, an evidential segmentation module that computes a belief function at each voxel for each modality, and a multi-modality evidence fusion module, which assigns discount rates to each modality's evidence and combines the discounted evidence using Dempster's rule. The whole framework is trained by minimizing a new loss function based on a discounted Dice index to increase segmentation accuracy and reliability. The method was evaluated on the BraTS 2021 database of 1251 patients with brain tumors. Quantitative and qualitative results show that our method outperforms the state of the art and implements an effective new idea for merging multi-source information within deep neural networks.
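As a self-contained illustration of the two classical operations this kind of framework builds on, the sketch below discounts two mass functions on a two-class frame and fuses them with Dempster's rule. The frame, the mass values, and the discount rates are illustrative choices of ours, not the paper's.

```python
# Illustrative sketch: Shafer discounting plus Dempster's rule on a
# two-class frame {tumor, background}. Focal sets are frozensets;
# all numbers below are made up for the example.

TUMOR, BG = frozenset({"t"}), frozenset({"b"})
OMEGA = TUMOR | BG

def discount(m, rate):
    """Shafer discounting: scale every mass by (1 - rate) and move the rest to Omega."""
    out = {A: (1 - rate) * v for A, v in m.items()}
    out[OMEGA] = out.get(OMEGA, 0.0) + rate
    return out

def dempster(m1, m2):
    """Dempster's rule: conjunctive combination followed by conflict normalization."""
    combined, conflict = {}, 0.0
    for A, v1 in m1.items():
        for B, v2 in m2.items():
            C = A & B
            if C:  # non-empty intersection
                combined[C] = combined.get(C, 0.0) + v1 * v2
            else:  # mass assigned to the empty set counts as conflict
                conflict += v1 * v2
    return {A: v / (1 - conflict) for A, v in combined.items()}

m1 = discount({TUMOR: 0.8, OMEGA: 0.2}, rate=0.1)  # reliable modality
m2 = discount({TUMOR: 0.6, OMEGA: 0.4}, rate=0.5)  # less reliable modality
fused = dempster(m1, m2)
assert abs(sum(fused.values()) - 1.0) < 1e-9
```

The discount rate plays the role of the per-modality reliability weight; a heavily discounted modality contributes mostly ignorance (mass on Omega) and thus influences the fused result less.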
The investigation of uncertainty is of major importance in risk-critical applications, such as medical image segmentation. Belief function theory, a formal framework for uncertainty analysis and multiple evidence fusion, has made significant contributions to medical image segmentation, especially since the development of deep learning. In this paper, we provide an introduction to the topic of medical image segmentation methods using belief function theory. We classify the methods according to the fusion step and explain how information with uncertainty or imprecision is modeled and fused with belief function theory. In addition, we discuss the challenges and limitations of present belief function-based medical image segmentation and propose orientations for future research. Future research could investigate both belief function theory and deep learning to achieve more promising and reliable segmentation results.
We introduce a general theory of epistemic random fuzzy sets for reasoning with fuzzy or crisp evidence. This framework generalizes both the Dempster-Shafer theory of belief functions, and possibility theory. Independent epistemic random fuzzy sets are combined by the generalized product-intersection rule, which extends both Dempster's rule for combining belief functions, and the product conjunctive combination of possibility distributions. We introduce Gaussian random fuzzy numbers and their multi-dimensional extensions, Gaussian random fuzzy vectors, as practical models for quantifying uncertainty about scalar or vector quantities. Closed-form expressions for the combination, projection and vacuous extension of Gaussian random fuzzy numbers and vectors are derived.
An automatic evidential segmentation method based on Dempster-Shafer theory and deep learning is proposed to segment lymphomas from three-dimensional positron emission tomography (PET) and computed tomography (CT) images. The architecture is composed of a deep feature-extraction module and an evidential layer. The feature-extraction module uses an encoder-decoder framework to extract semantic feature vectors from the 3D inputs. The evidential layer then uses prototypes in the feature space to compute a belief function at each voxel, quantifying the uncertainty about the presence or absence of lymphoma at that location. Two evidential layers, based on different ways of using distances to prototypes to compute mass functions, are compared. The whole model is trained by minimizing the Dice loss function. The proposed combination of deep feature extraction and evidential segmentation is shown to outperform the baseline UNet model as well as three other state-of-the-art models on a dataset of 173 patients.
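A distance-based evidential layer of the kind described above can be sketched as follows. This is a generic reconstruction in the spirit of prototype-based mass functions, not the paper's exact layer; the parameters `alpha` and `gamma` and the single-prototype-per-class setup are hypothetical.

```python
import numpy as np

def evidential_masses(x, prototypes, alpha=0.9, gamma=1.0):
    """Turn distances from x to one prototype per class into class masses
    plus a mass on full ignorance (Dempster-normalized)."""
    d2 = np.sum((prototypes - x) ** 2, axis=1)   # squared distances
    s = alpha * np.exp(-gamma * d2)              # per-class support in (0, alpha]
    K = len(s)
    # Conjunctive combination of the K simple mass functions {class_k: s_k, Omega: 1-s_k}
    m_single = np.array([s[k] * np.prod(1 - np.delete(s, k)) for k in range(K)])
    m_omega = np.prod(1 - s)                     # mass on full ignorance
    total = m_single.sum() + m_omega             # Dempster normalization factor
    return m_single / total, m_omega / total

protos = np.array([[0.0, 0.0], [3.0, 3.0]])      # one prototype per class
m, m_ign = evidential_masses(np.array([0.1, 0.1]), protos)
assert abs(m.sum() + m_ign - 1.0) < 1e-9
assert m[0] > m[1]  # the feature vector is closer to prototype 0
```

A voxel far from every prototype yields small supports and most of the mass lands on ignorance, which is exactly the uncertainty signal the abstract describes.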
The interpretation of unlabeled acoustic emission (AE) data classically relies on general-purpose clustering methods. While several external criteria have been used in the past to select the hyperparameters of those algorithms, few studies have paid attention to developing dedicated objective functions in clustering methods able to cope with the specificities of AE data. We investigate how to explicitly represent the onset of clusters in mixture models, in particular Gaussian mixture models (GMMs). By modifying the internal criterion of such models, we propose the first clustering method able to provide, through parameters estimated by an expectation-maximization procedure, information about when clusters occur (onset), how they grow (kinetics), and their level of activation through time. This new objective function accommodates the continuous timestamps of AE signals and thus their order of occurrence. The method, called GMMSEQ, is experimentally validated to characterize the loosening phenomenon in bolted structures under vibration. A comparison with three standard clustering methods on raw streaming data from five experimental campaigns shows that GMMSEQ not only provides useful qualitative information about the timeline of clusters, but also performs better in terms of cluster characterization. In line with open-science initiatives in acoustics and the FAIR principles, the datasets and code are made available to reproduce the research in this paper.
The Gaussian mixture model (GMM) provides a simple yet principled framework for clustering, with properties suitable for statistical inference. In this paper, we propose a new model-based clustering algorithm, called EGMM (evidential GMM), in the theoretical framework of belief functions, to better characterize the uncertainty of cluster membership. With the cluster membership of each object represented by a mass function, an evidential Gaussian mixture distribution, whose components correspond to the elements of the power set of the desired clusters, is proposed to model the entire dataset. The parameters in EGMM are estimated by a specially designed expectation-maximization (EM) algorithm. A validity index allowing automatic determination of the proper number of clusters is also provided. The proposed EGMM is as simple as the classical GMM, but can generate a more informative evidential partition for the dataset considered. Experiments on synthetic and real datasets demonstrate that the proposed EGMM performs better than other representative clustering algorithms. In addition, its superiority is also demonstrated by an application to multi-modal brain image segmentation.
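For reference, a minimal EM loop for the ordinary one-dimensional two-component GMM that EGMM generalizes is shown below (EGMM would additionally attach components to subsets of clusters, which is not reproduced here). This is purely illustrative and not the paper's algorithm.

```python
import numpy as np

def gmm_em(x, n_iter=50):
    """Fit a two-component 1-D GMM to data x by expectation-maximization."""
    mu = np.array([x.min(), x.max()])            # spread out the initial means
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibilities of each component for each point
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
               / (sigma * np.sqrt(2 * np.pi))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and standard deviations
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sigma

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-3, 1, 200), rng.normal(3, 1, 200)])
pi, mu, sigma = gmm_em(x)
assert abs(sorted(mu)[0] + 3) < 0.5 and abs(sorted(mu)[1] - 3) < 0.5
```

In EGMM, the responsibility matrix above would be replaced by masses over non-empty subsets of clusters, turning the hard/soft partition into an evidential one.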
Convolutional Neural Networks (CNNs) have demonstrated superiority in learning patterns, but are sensitive to label noises and may overfit noisy labels during training. The early stopping strategy averts updating CNNs during the early training phase and is widely employed in the presence of noisy labels. Motivated by biological findings that the amplitude spectrum (AS) and phase spectrum (PS) in the frequency domain play different roles in the animal's vision system, we observe that PS, which captures more semantic information, can increase the robustness of DNNs to label noise, more so than AS can. We thus propose early stops at different times for AS and PS by disentangling the features of some layer(s) into AS and PS using Discrete Fourier Transform (DFT) during training. Our proposed Phase-AmplituDe DisentangLed Early Stopping (PADDLES) method is shown to be effective on both synthetic and real-world label-noise datasets. PADDLES outperforms other early stopping methods and obtains state-of-the-art performance.
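The amplitude/phase disentanglement at the heart of the method can be illustrated with a plain DFT round trip; the function names below are ours, not the paper's API, and a real feature map would come from a network layer rather than random data.

```python
import numpy as np

def disentangle(features: np.ndarray):
    """Split a 2-D feature map into its amplitude and phase spectra via the DFT."""
    spectrum = np.fft.fft2(features)
    return np.abs(spectrum), np.angle(spectrum)

def recombine(amplitude: np.ndarray, phase: np.ndarray) -> np.ndarray:
    """Rebuild a feature map from amplitude and phase spectra."""
    return np.real(np.fft.ifft2(amplitude * np.exp(1j * phase)))

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
amp, ph = disentangle(x)
# Round-tripping both components recovers the original map exactly.
assert np.allclose(recombine(amp, ph), x)
```

Because the two spectra are recombined multiplicatively, training (and early stopping) can treat them as separate feature streams, which is what allows PADDLES to stop updating the amplitude path earlier than the phase path.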
We present a novel neural model for modern poetry generation in French. The model consists of two pretrained neural models that are fine-tuned for the poem generation task. The encoder of the model is a RoBERTa based one while the decoder is based on GPT-2. This way the model can benefit from the superior natural language understanding performance of RoBERTa and the good natural language generation performance of GPT-2. Our evaluation shows that the model can create French poetry successfully. On a 5 point scale, the lowest score of 3.57 was given by human judges to typicality and emotionality of the output poetry while the best score of 3.79 was given to understandability.
Graph Neural Networks (GNNs) have been successfully applied in many applications in computer sciences. Despite the success of deep learning architectures in other domains, deep GNNs still underperform their shallow counterparts. There are many open questions about deep GNNs, but over-smoothing and over-squashing are perhaps the most intriguing issues. When stacking multiple graph convolutional layers, the over-smoothing and over-squashing problems arise and have been defined as the inability of GNNs to learn deep representations and propagate information from distant nodes, respectively. Even though the widespread definitions of both problems are similar, these phenomena have been studied independently. This work strives to understand the underlying relationship between over-smoothing and over-squashing from a topological perspective. We show that both problems are intrinsically related to the spectral gap of the Laplacian of the graph. Therefore, there is a trade-off between these two problems, i.e., we cannot simultaneously alleviate both over-smoothing and over-squashing. We also propose a Stochastic Jost and Liu curvature Rewiring (SJLR) algorithm based on a bound of the Ollivier's Ricci curvature. SJLR is less expensive than previous curvature-based rewiring methods while retaining fundamental properties. Finally, we perform a thorough comparison of SJLR with previous techniques to alleviate over-smoothing or over-squashing, seeking to gain a better understanding of both problems.
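The spectral gap the analysis hinges on can be computed directly; the minimal sketch below uses the symmetric normalized Laplacian of a small path graph (the choice of normalization is our assumption, as the abstract does not fix one).

```python
import numpy as np

def spectral_gap(adj: np.ndarray) -> float:
    """Spectral gap (smallest non-zero eigenvalue) of L = I - D^{-1/2} A D^{-1/2}."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    lap = np.eye(len(adj)) - d_inv_sqrt @ adj @ d_inv_sqrt
    eigvals = np.sort(np.linalg.eigvalsh(lap))   # ascending; eigvals[0] ~ 0
    return eigvals[1]                            # lambda_2, the gap

# Path graph on 4 nodes: 0-1-2-3 (a bottlenecked topology)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
gap = spectral_gap(A)
assert 0.0 < gap < 2.0
```

Rewiring methods such as SJLR modify edges precisely to move this quantity: enlarging the gap eases over-squashing across bottlenecks, while shrinking it slows the mixing that causes over-smoothing, which is the trade-off the abstract describes.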